AAAI.2016 - Search and Constraint Satisfaction

Total: 11

#1 On the Extraction of One Maximal Information Subset That Does Not Conflict with Multiple Contexts [PDF]

Authors: Éric Grégoire ; Yacine Izza ; Jean-Marie Lagniez

The efficient extraction of one maximal information subset that does not conflict with multiple contexts or additional information sources is a key issue in many A.I. domains, especially when these contexts or sources can be mutually conflicting. In this paper, this question is addressed from a computational point of view in clausal Boolean logic. A new approach is introduced that experimentally outperforms the currently most efficient technique.
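
The abstract does not detail the authors' algorithm, but the underlying task can be illustrated with a naive "grow" procedure: clauses of the information base are added greedily, and a clause is kept only if the accumulated subset stays satisfiable together with each context taken separately (the contexts themselves may be mutually conflicting). The sketch below is an illustration of the task only, not the paper's method; the brute-force satisfiability check and the integer-literal clause encoding are assumptions made to keep it self-contained.

```python
from itertools import product

def is_satisfiable(clauses, n_vars):
    """Brute-force SAT check for tiny clause sets (illustration only).
    A clause is a list of non-zero ints; positive literal = variable is true."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return True
    return False

def grow_one_maximal_subset(base, contexts, n_vars):
    """Greedily grow one subset of `base` that does not conflict with any of the
    given contexts (naive illustration, not the paper's algorithm)."""
    kept = []
    for clause in base:
        candidate = kept + [clause]
        # keep the clause only if the subset stays consistent with every context
        if all(is_satisfiable(candidate + ctx, n_vars) for ctx in contexts):
            kept = candidate
    return kept

if __name__ == "__main__":
    base = [[1], [2], [-1, -2]]   # information base: x1, x2, (not x1 or not x2)
    contexts = [[[-1]], [[2]]]    # two mutually compatible-or-not contexts: {not x1}, {x2}
    print(grow_one_maximal_subset(base, contexts, n_vars=2))  # [[2], [-1, -2]]
```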

#2 The Meta-Problem for Conservative Mal'tsev Constraints [PDF]

Author: Clement Carbonnel

In the algebraic approach to the CSP (Constraint Satisfaction Problem), the complexity of constraint languages is studied using closure operations called polymorphisms. Many of these operations are known to induce tractability of any language they preserve. We focus on the meta-problem: given a language G, decide whether G has a polymorphism with certain desirable properties. We design an algorithm that decides in polynomial time whether a constraint language has a conservative Mal'tsev polymorphism, and outputs one if it exists. As a corollary we obtain that the class of conservative Mal'tsev constraints is uniformly tractable, and we conjecture that this result remains true in the non-conservative case.
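
For reference, the two properties named in the abstract are standard algebraic notions: a Mal'tsev operation is a ternary operation satisfying the identities below, and an operation is conservative when it always returns one of its arguments.

```latex
% Mal'tsev identities for a ternary operation m on a domain D
m(y, x, x) = m(x, x, y) = y \qquad \text{for all } x, y \in D
% Conservativity of an n-ary operation f
f(x_1, \dots, x_n) \in \{x_1, \dots, x_n\} \qquad \text{for all } x_1, \dots, x_n \in D
```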

#3 Counting-Based Search for Constraint Optimization Problems [PDF]

Author: Gilles Pesant

Branching heuristics based on counting solutions in constraints have been quite good at guiding search to solve constraint satisfaction problems. But do they perform as well for constraint optimization problems? We propose an adaptation of counting-based search for optimization, show how to modify solution density computation for some of the most frequently-occurring constraints, and empirically evaluate its performance on several benchmark problems.
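
As a rough illustration of the counting-based idea (not the paper's constraint-specific counting algorithms, which avoid explicit enumeration), the snippet below computes solution densities of a single constraint by exhaustively enumerating its solutions and branches on the variable-value pair of maximum density; all names are illustrative.

```python
from itertools import product

def solution_densities(variables, domains, constraint):
    """Return {(var, val): fraction of the constraint's solutions with var = val}.
    Exhaustive enumeration; real counting-based search replaces this with
    per-constraint counting algorithms (illustration only)."""
    solutions = [s for s in product(*(domains[v] for v in variables))
                 if constraint(dict(zip(variables, s)))]
    densities = {}
    for i, v in enumerate(variables):
        for val in domains[v]:
            densities[(v, val)] = (sum(1 for s in solutions if s[i] == val)
                                   / len(solutions)) if solutions else 0.0
    return densities

def max_sd_branching(variables, domains, constraint):
    """maxSD-style choice: branch on the assignment of highest solution density."""
    densities = solution_densities(variables, domains, constraint)
    return max(densities, key=densities.get)

if __name__ == "__main__":
    doms = {"x": [1, 2, 3], "y": [1, 2, 3], "z": [1, 2, 3]}
    alldiff = lambda a: len(set(a.values())) == len(a)
    print(max_sd_branching(["x", "y", "z"], doms, alldiff))  # e.g. ('x', 1)
```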

#4 Alternative Filtering for the Weighted Circuit Constraint: Comparing Lower Bounds for the TSP and Solving TSPTW [PDF]

Authors: Sylvain Ducomman ; Hadrien Cambazard ; Bernard Penz

Many problems, and in particular routing problems, require finding one or several circuits in a weighted graph. The weights often express the distance or the travel time between vertices. In this paper we propose various filtering algorithms for the weighted circuit constraint, which maintains a circuit in a weighted graph. They are typical cost-based filtering algorithms relying on relaxations of the Traveling Salesman Problem. We investigate three bounds and show that they are incomparable. In particular, we design a filtering algorithm based on a lower bound introduced in 1981 by Christofides et al. This bound can provide stronger filtering than the classical Held and Karp approach when additional information, such as the possible positions of the clients in the tour, is available. This is particularly suited for problems with side constraints such as time windows.
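
For context, the Held and Karp relaxation mentioned above builds on the 1-tree bound: a minimum spanning tree over all vertices except a chosen one, plus the two cheapest edges incident to that vertex, never costs more than the optimal tour. The sketch below computes that base bound for a symmetric distance matrix; it is an illustration only, not the paper's filtering algorithm, and it omits the Lagrangian weight updates of the full Held-Karp scheme.

```python
def mst_cost(dist, vertices):
    """Prim's algorithm on the complete graph restricted to `vertices`."""
    vertices = list(vertices)
    in_tree, cost = {vertices[0]}, 0.0
    while len(in_tree) < len(vertices):
        w, v = min((dist[u][v], v) for u in in_tree for v in vertices
                   if v not in in_tree)
        cost += w
        in_tree.add(v)
    return cost

def one_tree_bound(dist):
    """1-tree lower bound on the TSP: MST over vertices 1..n-1 plus the two
    cheapest edges incident to vertex 0 (base bound only, no Lagrangian step)."""
    n = len(dist)
    others = range(1, n)
    two_cheapest = sorted(dist[0][v] for v in others)[:2]
    return mst_cost(dist, others) + sum(two_cheapest)

if __name__ == "__main__":
    dist = [[0, 2, 9, 10],
            [2, 0, 6, 4],
            [9, 6, 0, 3],
            [10, 4, 3, 0]]
    print(one_tree_bound(dist))  # 18.0, a valid lower bound on any tour's length
```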

#5 Steiner Tree Problems with Side Constraints Using Constraint Programming [PDF]

Authors: Diego de Uña ; Graeme Gange ; Peter Schachte ; Peter Stuckey

The Steiner Tree Problem is a well-known NP-complete problem that is well studied and for which fast algorithms are already available. Nonetheless, in the real world the Steiner Tree Problem is almost always accompanied by side constraints, which means these approaches cannot be applied directly. For many problems with side constraints, only approximation algorithms are known. We introduce here a propagator for the tree constraint with explanations, as well as lower bounding techniques and a novel constraint programming approach for the Steiner Tree Problem and two of its variants. We find that our propagators with explanations are highly advantageous when it comes to solving variants of this problem.
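
As background for the approximation algorithms mentioned above: for the plain Steiner Tree Problem (without side constraints), the classical metric-closure heuristic already gives a factor-2 approximation. Compute all-pairs shortest paths, take a minimum spanning tree over the terminals in that metric, and expand its edges back into shortest paths. The sketch below returns the metric-closure MST weight, an upper bound within twice the optimum; it is illustrative only and unrelated to the paper's CP propagators.

```python
def metric_closure_mst_weight(n, edges, terminals):
    """Weight of an MST over `terminals` in the metric closure of the graph.
    Expanding its edges into shortest paths yields a Steiner tree of at most
    this weight, within a factor 2 of the optimum (classic heuristic)."""
    INF = float("inf")
    # Floyd-Warshall all-pairs shortest paths
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    # Prim's MST on the complete graph over the terminals in the metric closure
    terminals = list(terminals)
    in_tree, weight = {terminals[0]}, 0.0
    while len(in_tree) < len(terminals):
        w, t = min((d[u][t], t) for u in in_tree for t in terminals
                   if t not in in_tree)
        weight += w
        in_tree.add(t)
    return weight

if __name__ == "__main__":
    # star around Steiner vertex 1 linking terminals 0, 2, 3, plus a direct 0-3 edge
    edges = [(0, 1, 1.0), (1, 2, 1.0), (1, 3, 1.0), (0, 3, 3.0)]
    print(metric_closure_mst_weight(4, edges, [0, 2, 3]))  # 4.0 (the optimum is 3.0)
```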

#6 Bidirectional Search That Is Guaranteed to Meet in the Middle [PDF]

Authors: Robert Holte ; Ariel Felner ; Guni Sharon ; Nathan Sturtevant

We present MM, the first bidirectional heuristic search algorithm whose forward and backward searches are guaranteed to "meet in the middle", i.e., never expand a node beyond the solution midpoint. We also present a novel framework for comparing MM, A*, and brute-force search, and identify conditions favoring each algorithm. Finally, we present experimental results that support our theoretical analysis.
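
The meet-in-the-middle guarantee can be made concrete with MM's priority function, quoted here from the full paper rather than the abstract: a node in the forward open list is prioritized by the maximum of its usual f-value and twice its g-value (symmetrically in the backward direction), which prevents either frontier from expanding nodes past the solution midpoint.

```latex
% MM's forward priority (the backward priority is symmetric); C^* is the optimal cost
pr_F(n) = \max\bigl(f_F(n),\; 2\,g_F(n)\bigr),
\qquad \text{so a node is expanded forward only if } g_F(n) \le C^*/2 .
```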

#7 Increasing Nogoods in Restart-Based Search [PDF]

Authors: Jimmy Lee ; Christian Schulte ; Zichen Zhu

Restarts are an important technique to make search more robust. This paper is concerned with how to maintain and propagate nogoods recorded from restarts efficiently. It builds on reduced nld-nogoods introduced for restarts and increasing nogoods introduced for symmetry breaking. The paper shows that reduced nld-nogoods extracted from a single restart are in fact increasing, and can thus benefit from the efficient propagation algorithm of the incNGs global constraint. We present a lighter-weight filtering algorithm for incNGs in the context of restart-based search using dynamic event sets (dynamic subscriptions). We show formally that the lightweight version enforces GAC on each nogood while reducing the number of subscribed decisions. The paper also introduces an efficient approximation to nogood minimization such that all shortened reduced nld-nogoods from the same restart are also increasing and can be propagated with the new filtering algorithm. Experimental results confirm that our lightweight filtering algorithm and approximated nogood minimization successfully trade a slight loss in pruning for considerably better efficiency, and hence compare favorably against existing state-of-the-art techniques.
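
To make the objects concrete: when search restarts, the rightmost branch explored so far is a sequence of decisions, some taken positively and some being negations of refuted decisions. A reduced nld-nogood keeps, for each refuted decision, only the positive decisions above it plus that decision itself. The sketch below extracts these sets from a branch represented as (decision, was_refuted) pairs; this representation is an assumption made for illustration, and the incNGs propagation itself is not shown.

```python
def reduced_nld_nogoods(branch):
    """Extract reduced nld-nogoods from the last branch before a restart.

    `branch` is a list of (decision, refuted) pairs in branch order, where
    refuted=True means the branch actually follows the negation of `decision`.
    Each returned nogood is a set of decisions that cannot all hold together.
    (Illustrative representation; the paper shows such nogoods form an
    increasing sequence, enabling the incNGs-style filtering it describes.)"""
    nogoods = []
    positives = []
    for decision, refuted in branch:
        if refuted:
            nogoods.append(positives + [decision])
        else:
            positives.append(decision)
    return nogoods

if __name__ == "__main__":
    # branch: x=1 taken, y=2 refuted (so y!=2 holds), z=3 taken, w=1 refuted
    branch = [("x=1", False), ("y=2", True), ("z=3", False), ("w=1", True)]
    for ng in reduced_nld_nogoods(branch):
        print("not(", " and ".join(ng), ")")
```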

#8 Breaking More Composition Symmetries Using Search Heuristics [PDF]

Authors: Jimmy Lee ; Zichen Zhu

The pruning power of partial symmetry breaking depends on the given subset of symmetries to break as well as the interactions among symmetry breaking constraints. In the context of Partial Symmetry Breaking During Search (ParSBDS), the search order determines the set of symmetry breaking constraints to add and thus also makes an impact on node and solution pruning. In this paper, we give the first formal characterization of the pruning behavior of ParSBDS and its improved variants. Introducing the notion of Dominance-Completeness (DC-ness), we show that ParSBDS and variants eliminate the symmetry group of the given subset of symmetries if the resultant search tree is DC, and give an example scenario. Unfortunately, building a DC tree is not always possible. We propose two search heuristics with the aim of having more nodes dominated and thus also pruned during search. Extensive experimentation demonstrates how the proposed heuristics and their combination can drastically reduce the solution set size, search space and runtime when compared against the state-of-the-art static and dynamic symmetry breaking methods.

#9 Exponential Recency Weighted Average Branching Heuristic for SAT Solvers [PDF]

Authors: Jia Liang ; Vijay Ganesh ; Pascal Poupart ; Krzysztof Czarnecki

Modern conflict-driven clause-learning SAT solvers routinely solve large real-world instances with millions of clauses and variables. Their success crucially depends on effective branching heuristics. In this paper, we propose a new branching heuristic inspired by the exponential recency weighted average algorithm used to solve the bandit problem. The branching heuristic, which we call CHB, learns online which variables to branch on by leveraging the feedback received from conflict analysis. We evaluated CHB on 1200 instances from the 2013 and 2014 SAT Competitions and showed that CHB solves significantly more instances than VSIDS, currently the most effective branching heuristic in widespread use. More precisely, we implemented CHB as part of the MiniSat and Glucose solvers and performed an apples-to-apples comparison with their VSIDS-based variants. CHB-based MiniSat (resp. CHB-based Glucose) solved approximately 16.1% (resp. 5.6%) more instances than its VSIDS-based variant. Additionally, CHB-based solvers are much more efficient at constructing first preimage attacks on step-reduced SHA-1 and MD5 cryptographic hash functions than their VSIDS-based counterparts. To the best of our knowledge, CHB is the first branching heuristic to solve significantly more instances than VSIDS on a large, diverse benchmark of real-world instances.
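
The update at the core of such a heuristic is the exponential recency weighted average from the bandit literature: each variable keeps a score that moves a step of size alpha toward a reward, and the reward is larger the more recently the variable took part in a conflict. The sketch below shows that update in isolation; the particular constants, reward definition, and update points are assumptions made for illustration and may differ from CHB's exact choices.

```python
class ErwaScores:
    """Exponential recency weighted average scores for branching variables.
    Illustrative sketch of the bandit-style update behind a CHB-like heuristic;
    the constants are placeholders, not necessarily CHB's published values."""

    def __init__(self, variables, alpha=0.4, alpha_min=0.06, alpha_decay=1e-6):
        self.q = {v: 0.0 for v in variables}
        self.last_conflict = {v: 0 for v in variables}
        self.conflicts = 0
        self.alpha, self.alpha_min, self.alpha_decay = alpha, alpha_min, alpha_decay

    def on_conflict(self, conflict_variables):
        """Record that these variables just appeared in a conflict; decay alpha."""
        self.conflicts += 1
        for v in conflict_variables:
            self.last_conflict[v] = self.conflicts
        self.alpha = max(self.alpha_min, self.alpha - self.alpha_decay)

    def on_assigned(self, v):
        """ERWA update when v is branched on or propagated: the reward shrinks
        with the number of conflicts since v last appeared in one."""
        reward = 1.0 / (self.conflicts - self.last_conflict[v] + 1)
        self.q[v] = (1.0 - self.alpha) * self.q[v] + self.alpha * reward

    def pick_branching_variable(self, unassigned):
        """Branch on the unassigned variable with the highest score."""
        return max(unassigned, key=self.q.get)

if __name__ == "__main__":
    scores = ErwaScores(["x1", "x2", "x3"])
    scores.on_conflict(["x1", "x2"])       # x1 and x2 took part in a conflict
    for v in ("x1", "x2", "x3"):
        scores.on_assigned(v)              # x3 gets a smaller reward: no recent conflict
    print(scores.pick_branching_variable(["x1", "x2", "x3"]))  # "x1" (tied with "x2")
```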

#10 Using the Shapley Value to Analyze Algorithm Portfolios [PDF]

Authors: Alexandre Fréchette ; Lars Kotthoff ; Tomasz Michalak ; Talal Rahwan ; Holger Hoos ; Kevin Leyton-Brown

Algorithms for NP-complete problems often have different strengths and weaknesses, and thus algorithm portfolios often outperform individual algorithms. It is surprisingly difficult to quantify a component algorithm's contribution to such a portfolio. Reporting a component's standalone performance wrongly rewards near-clones while penalizing algorithms that have small but distinct areas of strength. Measuring a component's marginal contribution to an existing portfolio is better, but penalizes sets of strongly correlated algorithms, thereby obscuring situations in which it is essential to have at least one algorithm from such a set. This paper argues for analyzing component algorithm contributions via a measure drawn from coalitional game theory, the Shapley value, and yields insight into a research community's progress over time. We conclude with an application of the analysis we advocate to SAT competitions, yielding novel insights into the behaviour of algorithm portfolios, their components, and the state of SAT solving technology.
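
Concretely, the analysis treats each solver as a player and a coalition's value as the performance of the virtual best solver built from that coalition; the performance measure used below (number of instances solved) is a simplifying choice made for this illustration. The Shapley value of a solver is then its marginal contribution averaged over all orders in which the portfolio could be assembled. The exact computation is exponential and only meant for small portfolios.

```python
from itertools import permutations

def coalition_value(coalition, solved_by):
    """Instances solved by the virtual best solver of `coalition`
    (a simplified performance measure chosen for this illustration)."""
    return len(set().union(*(solved_by[s] for s in coalition))) if coalition else 0

def shapley_values(solvers, solved_by):
    """Exact Shapley values: average marginal contribution over all join orders.
    Exponential in the number of solvers, so only suitable for small portfolios."""
    values = {s: 0.0 for s in solvers}
    orders = list(permutations(solvers))
    for order in orders:
        before = []
        for s in order:
            gain = (coalition_value(before + [s], solved_by)
                    - coalition_value(before, solved_by))
            values[s] += gain / len(orders)
            before.append(s)
    return values

if __name__ == "__main__":
    solved_by = {                 # which benchmark instances each solver solves
        "A":      {1, 2, 3, 4},
        "Aclone": {1, 2, 3, 4},   # near-clone: same standalone score as A
        "B":      {5},            # small but unique area of strength
    }
    print(shapley_values(list(solved_by), solved_by))
    # A and its clone split the credit for {1,2,3,4}; B gets full credit for {5}
```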

#11 Component Caching in Hybrid Domains with Piecewise Polynomial Densities [PDF]

Authors: Vaishak Belle ; Guy Van den Broeck ; Andrea Passerini

Counting the models of a propositional formula is an important problem: for example, it serves as the backbone of probabilistic inference by weighted model counting. A key algorithmic insight is component caching (CC), in which disjoint components of a formula, generated dynamically during a DPLL search, are cached so that they only have to be solved once. In recent years, driven by SMT technology and probabilistic inference in hybrid domains, there has been increasing interest in counting the models of linear arithmetic sentences. To date, however, solvers for these problems are block-clause implementations, which are nonviable on large problem instances. In this paper, as a first step in extending CC to hybrid domains, we show how propositional CC systems can be leveraged when limited to piecewise polynomial densities. Our experiments demonstrate a large gap in performance when compared to existing approaches based on a variety of block-clause strategies.
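
For readers unfamiliar with the propositional technique being extended, the sketch below shows component caching in a toy model counter: the formula is split into clause sets that share no variables, each component is counted independently (here by brute force rather than within a DPLL search), the counts are multiplied, and components already seen are looked up in a cache. In the hybrid, piecewise-polynomial setting the per-component counter is replaced by an integrator, which is not shown here.

```python
from itertools import product

def components(clauses):
    """Partition clauses into groups that share no variables."""
    groups = []  # list of (variable set, clause list)
    for clause in clauses:
        vars_c = {abs(l) for l in clause}
        touching = [g for g in groups if vars_c & g[0]]
        for g in touching:
            vars_c |= g[0]
        groups = [g for g in groups if g not in touching]
        groups.append((vars_c, [clause] + [c for g in touching for c in g[1]]))
    return [g[1] for g in groups]

def count_component(clauses, cache):
    """Brute-force model count of one component, with caching."""
    key = frozenset(frozenset(c) for c in clauses)
    if key not in cache:
        variables = sorted({abs(l) for c in clauses for l in c})
        cache[key] = sum(
            all(any((m[abs(l)] if l > 0 else not m[abs(l)]) for l in c) for c in clauses)
            for bits in product([False, True], repeat=len(variables))
            for m in [dict(zip(variables, bits))]
        )
    return cache[key]

def model_count(clauses, cache=None):
    """Component-caching model count: product of counts of disjoint components."""
    cache = {} if cache is None else cache
    result = 1
    for comp in components(clauses):
        result *= count_component(comp, cache)
    return result

if __name__ == "__main__":
    # (x1 or x2) and (x3 or not x4): two disjoint components, 3 * 3 = 9 models
    print(model_count([[1, 2], [3, -4]]))
```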